Container-native Virtualization is an add-on to OpenShift Container Platform that allows virtual machine workloads to run and be managed alongside container workloads. You can create virtual machines from disk images imported using the containerized data importer (CDI) controller, or from scratch within OpenShift Container Platform.
Container-native Virtualization introduces two new objects to OpenShift Container Platform:
Virtual Machine: The virtual machine in OpenShift Container Platform
Virtual Machine Instance: A running instance of the virtual machine
With the Container-native Virtualization add-on, virtual machines run in pods and have the same network and storage capabilities as standard pods.
Existing virtual machine disks are imported into persistent volumes (PVs), which are made accessible to Container-native Virtualization virtual machines using persistent volume claims (PVCs). In OpenShift Container Platform, the virtual machine object can be modified or replaced as needed, without affecting the persistent data stored on the PV.
|
Container-native Virtualization is currently a Technology Preview feature. For details about Red Hat support for Container-native Virtualization, see the Container-native Virtualization - Technology Preview Support Policy. Technology Preview features are not supported with Red Hat production service level agreements (SLAs), might not be functionally complete, and Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview/. |
The oc client is a command-line utility for managing OpenShift Container Platform resources. The following table contains the oc commands that you use with Container-native Virtualization.
| Command | Description |
|---|---|
| oc get <object_type> | Display a list of objects for the specified object type in the project. |
| oc describe <object_type> <resource_name> | Display details of the specific resource. |
| oc create -f <object_config> | Create a resource from a filename or from stdin. |
| oc process -f <template_config> | Process a template into a configuration file. Templates have "parameters", which may either be generated on creation or set by the user, as well as metadata describing the template. |
| oc apply -f <object_config> | Apply a configuration to a resource by filename or stdin. |
See the
OpenShift
Container Platform CLI Reference Guide, or run the oc --help command,
for definitive information on the OpenShift Container Platform client.
The virtctl client is a command-line utility for managing Container-native Virtualization resources. The following table contains the virtctl commands used throughout this document.
| Command | Description |
|---|---|
| virtctl start <vm> | Start a virtual machine, creating a virtual machine instance. |
| virtctl stop <vmi> | Stop a virtual machine instance. |
| virtctl expose <vm> | Create a service that forwards a designated port of a virtual machine or virtual machine instance and expose the service on the specified port of the node. |
| virtctl console <vmi> | Connect to a serial console of a virtual machine instance. |
| virtctl vnc <vmi> | Open a VNC connection to a virtual machine instance. |
| virtctl image-upload | Upload a virtual machine disk from a client machine to the cluster. |
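For example, assuming a virtual machine named example-vm exists in the current project:
$ virtctl start example-vm
$ virtctl console example-vm
$ virtctl stop example-vm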
Before you modify objects using the shell or web console, ensure you use the correct project. In the shell, use the following commands:
| Command | Description |
|---|---|
| oc projects | List all available projects. The current project is marked with an asterisk. |
| oc project <project_name> | Switch to another project. |
| oc new-project <project_name> | Create a new project. |
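For example, to list the available projects and switch to one of them (the project name cnv-demo is illustrative):
$ oc projects
$ oc project cnv-demo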
In the web console, click the Project list and select the appropriate project, or create a new one.
You can use virtctl image-upload to upload a virtual machine disk image from
a client machine to your OpenShift Container Platform cluster. This will create a PVC that can be
associated with a virtual machine after the upload has completed.
A virtual machine disk image, in RAW or QCOW2 format. It may be compressed
using xz or gzip.
kubevirt-virtctl must be installed on the client machine.
Identify the following items:
The file location of the VM disk image that you want to upload
The desired name and size of the resulting PVC
Expose the cdi-uploadproxy service so that you can upload data to your cluster:
cat <<EOF | oc apply -f -
apiVersion: v1
kind: Route
metadata:
name: cdi-uploadproxy
namespace: kube-system
spec:
to:
kind: Service
name: cdi-uploadproxy
tls:
termination: passthrough
EOF
Use the virtctl image-upload command to begin uploading your VM image,
making sure to include your chosen parameters. For example:
$ virtctl image-upload --uploadproxy-url=https://$(oc get route cdi-uploadproxy -o=jsonpath='{.status.ingress[0].host}')/v1alpha1/upload --pvc-name=upload-pvc --pvc-size=10Gi --image-path=/images/fedora28.qcow2
|
When the upload is complete, the PVC will be created as specified. |
To verify that the PVC was created, view all PVC objects:
$ oc get pvc
Next, you can create a virtual machine object to bind to the PVC.
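As a sketch, the uploaded PVC can then be referenced as a persistentVolumeClaim volume in the virtual machine configuration; the volume name root is illustrative, and the claimName matches the PVC created above:
volumes:
- name: root
  persistentVolumeClaim:
    claimName: upload-pvc
The full structure of such a configuration is shown in the vm.yaml sample in the Reference section.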
DataVolume objects provide orchestration of import, clone, and upload operations associated with an underlying PVC. DataVolumes are integrated with KubeVirt and they can prevent a virtual machine from being started before the PVC has been prepared.
The virtual machine disk can be in RAW or QCOW2 format and may be compressed using xz or gzip. The disk image must be available at either an HTTP or S3 endpoint.
Identify an HTTP or S3 file server that hosts the virtual disk image that you want to import. You need the complete URL in the correct format:
http://www.myUrl.com/path/to/data
s3://bucketName/fileName
If your data source requires authentication credentials, edit the
endpoint-secret.yaml file and apply it to the cluster:
apiVersion: v1
kind: Secret
metadata:
name: endpoint-secret
labels:
app: containerized-data-importer
type: Opaque
data:
accessKeyId: "" # <optional: your key or user name, base64 encoded>
secretKey: "" # <optional: your secret or password, base64 encoded>
$ oc apply -f endpoint-secret.yaml
Edit the VM configuration file, optionally including the
secretRef parameter. In our example, we used a Fedora image:
apiVersion: kubevirt.io/v1alpha2
kind: VirtualMachine
metadata:
creationTimestamp: null
labels:
kubevirt.io/vm: vm-fedora-datavolume
name: vm-fedora-datavolume
spec:
dataVolumeTemplates:
- metadata:
creationTimestamp: null
name: fedora-dv
spec:
pvc:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 2Gi
storageClassName: local
source:
http:
url: https://download.fedoraproject.org/pub/fedora/linux/releases/28/Cloud/x86_64/images/Fedora-Cloud-Base-28-1.1.x86_64.qcow2
secretRef: "" # Optional
status: {}
running: false
template:
metadata:
creationTimestamp: null
labels:
kubevirt.io/vm: vm-fedora-datavolume
spec:
domain:
devices:
disks:
- disk:
bus: virtio
name: datavolumedisk1
volumeName: datavolumevolume1
machine:
type: ""
resources:
requests:
memory: 64M
terminationGracePeriodSeconds: 0
volumes:
- dataVolume:
name: fedora-dv
name: datavolumevolume1
status: {}
Create the virtual machine:
$ oc create -f vm-<name>-datavolume.yaml
The virtual machine and a DataVolume will now be created. The CDI controller
creates an underlying PVC with the correct annotation and begins the import
process. When the import completes, the DataVolume status will change to
Succeeded and the virtual machine will be allowed to start.
DataVolume provisioning happens in the background, so there is no need to monitor it. You can start the VM and it will only run when the import is complete.
Run $ oc get pods and look for the importer pod. This pod
downloads the image from the specified URL and stores it on the provisioned PV.
Monitor the DataVolume status until it shows Succeeded. In our example, we would run the following command:
$ oc describe dv fedora-dv
To verify that provisioning is complete and that the VMI has started, try accessing its serial console:
$ virtctl console vm-fedora-datavolume
The process of importing a virtual machine disk is handled by the CDI
controller. When a PVC is created with special
cdi.kubevirt.io/storage.import annotations, the controller creates a
short-lived import pod that attaches to the PV and downloads the virtual
disk image into the PV.
The virtual machine disk can be RAW or QCOW2 format and may be compressed
using xz or gzip. The disk image must be available at either an HTTP or S3
endpoint.
| For locally provisioned storage, the PV needs to be created before the PVC. This is not required for OpenShift Container Storage, for which the PVs are created dynamically. |
Identify an HTTP or S3 file server hosting the virtual disk image that you would like to import. You will need the complete URL, in either format:
http://www.myUrl.com/path/to/data
s3://bucketName/fileName
You will use this URL as the cdi.kubevirt.io/storage.import.endpoint
annotation value in your PVC configuration file.
For example: cdi.kubevirt.io/storage.import.endpoint:
https://download.fedoraproject.org/pub/fedora/linux/releases/28/Cloud/x86_64/images/Fedora-Cloud-Base-28-1.1.x86_64.qcow2
If the file server requires authentication credentials, edit the
endpoint-secret.yaml file:
apiVersion: v1
kind: Secret
metadata:
name: endpoint-secret
labels:
app: containerized-data-importer
type: Opaque
data:
accessKeyId: "" # <optional: your key or user name, base64 encoded>
secretKey: "" # <optional: your secret or password, base64 encoded>
Save the value of metadata.name to use with the
cdi.kubevirt.io/storage.import.secret annotation in your PVC
configuration file.
For example: cdi.kubevirt.io/storage.import.secret:
endpoint-secret
Apply endpoint-secret.yaml to the cluster:
$ oc apply -f endpoint-secret.yaml
Edit the PVC configuration file, making sure to include the required annotations.
For example:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: "example-vmdisk-volume"
labels:
app: containerized-data-importer
annotations:
cdi.kubevirt.io/storage.import.endpoint: "https://download.fedoraproject.org/pub/fedora/linux/releases/28/Cloud/x86_64/images/Fedora-Cloud-Base-28-1.1.x86_64.qcow2"
cdi.kubevirt.io/storage.import.secret: "endpoint-secret"
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 5Gi
Create the PVC using the oc CLI:
$ oc create -f <data-import.yaml>
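While the import runs, you can optionally watch for the short-lived importer pod and inspect its logs. The pod name is generated by the CDI controller, so the name in the second command is a placeholder:
$ oc get pods
$ oc logs <importer_pod_name>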
After the disk image has been successfully imported into the PV, the import pod expires, and you can bind the PVC to a virtual machine object within OpenShift Container Platform.
Next, create a virtual machine object to bind to the PVC.
| This procedure has been deprecated from 1.3 onwards. |
The Import Virtual Machine Ansible playbook has the option to import a
virtual machine as a template object, which you can use to create virtual
machines.
Templates are useful when you want to create multiple virtual machines from the same base image with minor changes to resource parameters.
A cluster running OpenShift Container Platform 3.11
Container-native Virtualization version 1.3
Ensure you are in the correct project. If not, click the Project drop-down menu and select the appropriate project or create a new one.
Click the Catalog button on the side menu.
Click the Virtualization tab to filter the catalog.
Click Import Virtual Machine and click Next.
Select Import as a template from URL and click Next.
Enter the required parameters. For example:
Add to Project: template-test
OpenShift Admin Username: cnv-admin
OpenShift Admin Password: password
ReType OpenShift Admin Password: password
Disk Image URL: https://download.fedoraproject.org/pub/fedora/linux/releases/28/Cloud/x86_64/images/Fedora-Cloud-Base-28-1.1.x86_64.qcow2
Operating system type: linux
Template Name: fedora
Number of Cores: 1
Memory (MiB): 1024
Disk Size (GiB) (leave at 0 to auto detect size): 0
Storage Class (leave empty to use the default storage class):
Click Create to begin importing the virtual machine.
A temporary pod, with a generated name of the form importer-template-<name>-dv-01-<random>, is created to handle the process of importing
the data and creating the template. Upon completion, this temporary pod is
discarded, and the <name>-template (fedora-template in the previous step)
is visible in the catalog and can be used to create virtual machines.
| You may need to refresh your browser to see the template upon completion. This is due to a limitation of the Template Service Broker. |
You can create a new virtual machine that clones the PVC of an existing virtual machine into a new DataVolume. Referencing a dataVolumeTemplate in the virtual machine spec, the source PVC is cloned to a new DataVolume, which is then automatically used for the creation of the virtual machine.
To clone a DataVolume, examine it to identify the associated PVC. Use the details of the identified PVC as the source.
| When a DataVolume is created as part of the DataVolumeTemplate of a virtual machine, the lifecycle of the DataVolume is then dependent on the virtual machine: If the virtual machine is deleted, the DataVolume and associated PVC will also be deleted. |
A PVC of an existing virtual machine disk. The associated virtual machine must be powered down, or the clone process will be queued until the PVC is available.
Create a YAML file for a VirtualMachine object. The following virtual machine example, vm-dv-clone, clones my-favorite-vm-disk (located in the source-namespace namespace) and creates the 2Gi favorite-clone DataVolume, referenced in the virtual machine as the dv-clone volume.
For example:
apiVersion: kubevirt.io/v1alpha2
kind: VirtualMachine
metadata:
labels:
kubevirt.io/vm: vm-dv-clone
name: vm-dv-clone
spec:
running: false
template:
metadata:
labels:
kubevirt.io/vm: vm-dv-clone
spec:
domain:
devices:
disks:
- disk:
bus: virtio
name: registry-disk
volumeName: root
resources:
requests:
memory: 64M
volumes:
- dataVolume:
name: favorite-clone
name: root
dataVolumeTemplates:
- metadata:
name: favorite-clone
spec:
pvc:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 2Gi
source:
pvc:
namespace: "source-namespace"
name: "my-favorite-vm-disk"
Create the virtual machine with the PVC-cloned DataVolume:
oc create -f <vm-clone-dvt>.yaml
You can clone a PVC of an existing virtual machine disk into a new DataVolume. The new DataVolume can then be used for a new virtual machine.
To clone a DataVolume, examine it to identify the associated PVC. Use the details of the identified PVC as the source.
| When a DataVolume is created independently of a virtual machine, the lifecycle of the DataVolume is independent of the virtual machine: If the virtual machine is deleted, neither the DataVolume nor its associated PVC will be deleted. |
A PVC of an existing virtual machine disk. The associated virtual machine must be powered down, or the clone process will be queued until the PVC is available.
Create a YAML file for a DataVolume object that specifies the following parameters:
| Parameter | Description |
|---|---|
| metadata: name | The name of the new DataVolume. |
| source: pvc: namespace | The namespace in which the source PVC exists. |
| source: pvc: name | The name of the source PVC. |
| storage | The size of the new DataVolume. Be sure to allocate enough space, or the cloning operation will not complete. The size must be the same as or larger than that of the source PVC. |
For example:
apiVersion: cdi.kubevirt.io/v1alpha1
kind: DataVolume
metadata:
  name: cloner-datavolume
spec:
  source:
    pvc:
      namespace: "source-namespace"
      name: "my-favorite-vm-disk"
  pvc:
    accessModes:
    - ReadWriteOnce
    resources:
      requests:
        storage: 500Mi
Start the PVC clone by creating the DataVolume:
oc create -f <datavolume>.yaml
DataVolumes prevent a virtual machine from being started before the PVC has been prepared, so you can create a virtual machine that references the new DataVolume while the PVC is being cloned.
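For example, a virtual machine created at this point could reference the new DataVolume with a volume entry like the following sketch, in which the volume name root is illustrative:
volumes:
- dataVolume:
    name: cloner-datavolume
  name: root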
The Container-native Virtualization user interface is a specific build of the OpenShift Container Platform web console that contains the core features needed for virtualization use cases, including a virtualization navigation item.
kubevirt-web-ui is installed by default during the kubevirt-apb deployment.
You can create a new virtual machine from the Container-native Virtualization web console. The VM can be configured with an interactive wizard, or you can bring your own YAML file.
A cluster running OpenShift Container Platform 3.11 or newer
Container-native Virtualization, version 1.3 or newer
Access the web UI at
kubevirt-web-ui.your.app.subdomain.host.com. Log in by using your
OpenShift Container Platform credentials.
Open the Workloads menu and select the Virtual Machines menu item. The list of available virtual machines will be shown.
Click Create Virtual Machine and you will be presented with two options:
| Option | Description |
|---|---|
| Create with YAML | Allows you to paste and submit a YAML file describing the VM. |
| Create with Wizard | Takes you through the process of VM creation. |
Select Create with Wizard. Input the desired VM name, description, and namespace where you want the VM to be created.
Next, select the provisioning source from the following options:
| Source | Description |
|---|---|
| PXE | The virtual machine will be booted over the network. The network interface and logical network are configured later in the Networking tab. |
| URL | Provide the address of a disk image accessible from OpenShift Container Platform. RAW and QCOW2 formats are supported. Either format may be compressed with xz or gzip. |
| Registry | Provide a container containing a bootable operating system in a registry accessible from OpenShift Container Platform. |
| Template | Create a new virtual machine from a VM template that you imported by using the Import Virtual Machine Ansible playbook. |
Next, select the operating system, flavor, and workload profile for the VM. If you want to use cloud-init or start the virtual machine on creation, select those options. Then, proceed to the next screen.
On the networking screen, you can create, delete, or rearrange network interfaces. The interfaces can be connected to a logical network by using a NetworkAttachmentDefinition. If you do not need to configure networking, you can skip this step and create the VM.
On the storage screen, you can create, delete, or rearrange the virtual machine disks. If you want to attach a PVC from this screen, it must already exist. If you do not need to configure storage, you can skip this step and create the VM.
Once you have created the VM, it will be visible under Workloads > Virtual Machines. You can start the new VM from the "cog" icon to the left of the VM entry. To see additional details about the VM, click its name.
To interact with the operating system, click the Consoles tab. This will connect to the VNC console of the virtual machine.
The spec object of the VirtualMachine configuration file references
the virtual machine settings, such as the number of cores and the amount
of memory, the disk type, and the volumes to use.
Attach the virtual machine disk to the virtual machine by referencing
the relevant PVC claimName as a volume.
|
ReplicaSet is not currently supported in Container-native Virtualization. |
See the Reference section for information about volume types and sample configuration files.
| Setting | Description |
|---|---|
| cores | The number of cores inside the virtual machine. Must be a value greater than or equal to 1. |
| memory | The amount of RAM allocated to the virtual machine by the node. Specify the denomination with M for megabyte or Gi for gibibyte. |
| disks: volumeName | The name of the volume that is referenced. Must match the name of a volume. |
| Setting | Description |
|---|---|
| name | The name of the volume. Must be a DNS_LABEL and unique within the virtual machine. |
| persistentVolumeClaim | The PVC to attach to the virtual machine. |
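As a minimal sketch, these settings fit together in the spec as follows; the names rootdisk, root, and example-vmdisk-volume are illustrative, with example-vmdisk-volume matching the PVC used elsewhere in this document:
spec:
  template:
    spec:
      domain:
        cpu:
          cores: 1
        devices:
          disks:
          - disk:
              bus: virtio
            name: rootdisk
            volumeName: root
        resources:
          requests:
            memory: 1Gi
      volumes:
      - name: root
        persistentVolumeClaim:
          claimName: example-vmdisk-volume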
See the kubevirt API Reference for a definitive list of virtual machine settings.
To create a virtual machine with the OpenShift Container Platform client:
$ oc create -f <vm.yaml>
Virtual machines are created in a stopped state. Run a virtual machine instance by starting it.
Templates can be used to create virtual machines, removing the need to download a disk image for each virtual machine. The PVC created for the template is cloned, allowing you to change the resource parameters for each new virtual machine.
A cluster running OpenShift Container Platform 3.11 or newer
Container-native Virtualization version 1.3 or newer
Ensure you are in the correct project. If not, click the Project drop-down
menu and select the appropriate project or create a new one.
Click the Catalog button on the side menu.
Click the Virtualization tab to filter the catalog.
Select the template and click Next.
Enter the required parameters. For example:
Add to Project: template-test
NAME: fedora-1
MEMORY: 2048Mi
CPU_CORES: 2
Click Next.
Choose whether or not to create a binding for the virtual machine. Bindings create a secret containing the necessary information for another application to use the virtual machine service. Bindings can also be added after the virtual machine has been created.
Click Create to begin creating the virtual machine.
Temporary pods with the generated names clone-source-pod-<random> and clone-target-pod-<random>
are created, in the template project and the virtual machine project respectively, to handle the creation of the virtual machine and the corresponding PVC.
The PVC is given a generated name of vm-<vm-name>-disk-01 or vm-<vm-name>-dv-01. Upon completion, the
temporary pods are discarded, and the virtual machine (fedora-1 in the above example)
is ready in a Stopped state.
The virtctl console command opens a serial console to the specified virtual
machine instance.
The virtual machine instance you want to access must be running
Connect to the serial console with virtctl:
$ virtctl console <VMI>
The virtctl client utility can use remote-viewer to open a graphical console
to a running virtual machine instance. This is installed with the virt-viewer
package.
virt-viewer must be installed.
The virtual machine instance you want to access must be running.
|
If you use |
Connect to the graphical interface with the virtctl utility:
$ virtctl vnc <VMI>
If this command is successful, you will now be connected to the graphical interface.
If the command fails, try using the -v flag to collect
troubleshooting information:
$ virtctl vnc <VMI> -v 4
You can use SSH to access a virtual machine, but first you must expose port 22 on the VM.
The virtctl expose command forwards a virtual machine instance port to a node
port and creates a service for enabled access. The following example creates
the fedora-vm-ssh service, which forwards port 22 of the fedora-vm virtual
machine to a port on the node.
The virtual machine instance you want to access must be running.
Run the following command to create the fedora-vm-ssh service:
$ virtctl expose vm fedora-vm --port=20022 --target-port=22 --name=fedora-vm-ssh --type=NodePort
Check the service to find out which port the service acquired:
$ oc get svc
NAME            TYPE       CLUSTER-IP   EXTERNAL-IP   PORT(S)           AGE
fedora-vm-ssh   NodePort   127.0.0.1    <none>        20022:32551/TCP   6s
Log in to the virtual machine instance via SSH, using the IP address of the
node and the port that you found in Step 2:
$ ssh username@<node IP> -p 32551
Virtual machines can be started and stopped, depending on the current state of the virtual machine. The option to restart VMs is available in the web console only.
The virtctl client utility is used to change the state of the virtual
machine, open virtual console sessions with the virtual
machines, and expose virtual machine ports as services.
The virtctl syntax is: virtctl <action> <VM-name>
You can only control objects in the project you are currently working
in, unless you specify the -n <project_name> option.
Examples:
$ virtctl start example-vm
$ virtctl stop example-vm
oc get vm lists the virtual machines in the project. oc get vmi
lists running virtual machine instances.
When you delete a virtual machine, the PVC it uses is unbound. If you do not plan to bind this PVC to a different VM, delete it, too.
You can only delete objects in the project you are currently working in,
unless you specify the -n <project_name> option.
$ oc delete vm fedora-vm
$ oc delete pvc fedora-vm-pvc
With Container-native Virtualization, you can connect a virtual machine instance to an Open vSwitch bridge configured on the node.
A cluster running OpenShift Container Platform 3.11 or newer
Prepare the cluster host networks (optional).
If the host network needs additional configuration changes, such as bonding, see the Red Hat Enterprise Linux networking guide.
Configure interfaces and bridges on all cluster hosts.
On each node, choose an interface connected to the desired network. Then, create an Open vSwitch bridge and specify the interface you chose as the bridge’s port.
In this example, we create bridge br1 and connect it to interface
eth1. This bridge must be configured on all nodes. If it is only
available on a subset of nodes, make sure that VMIs have nodeSelector
constraints in place.
| Any connections to eth1 will be lost once the interface is assigned to the bridge, so another interface should exist on the host. |
$ ovs-vsctl add-br br1
$ ovs-vsctl add-port br1 eth1
$ ovs-vsctl show
8d004495-ea9a-44e1-b00c-3b65648dae5f
Bridge br1
Port br1
Interface br1
type: internal
Port "eth1"
Interface "eth1"
ovs_version: "2.8.1"
Configure the network on the cluster.
L2 networks are treated as cluster-wide resources. Define the network in a network attachment definition YAML file by using the NetworkAttachmentDefinition CRD.
The NetworkAttachmentDefinition CRD object contains information about
pod-to-network attachment. In the following example, there is an
attachment to Open vSwitch bridge br1 and traffic is tagged to VLAN
100.
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
name: vlan-100-net-conf
spec:
config: '{
"cniVersion": "0.3.1",
"type": "ovs",
"bridge": "br1",
"vlan": 100
}'
"vlan" is optional. If omitted, the VMI will be attached
through a trunk.
|
Edit the virtual machine instance configuration file to include the details of the interface and network.
Specify that the network is connected to the previously created
NetworkAttachmentDefinition. In this scenario, vlan-100-net is
connected to the NetworkAttachmentDefinition called
vlan-100-net-conf:
networks:
- name: default
pod: {}
- name: vlan-100-net
multus:
networkName: vlan-100-net-conf
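The corresponding interfaces section under devices can follow the same pattern as the PXE example later in this document; the interface names must match the network names defined above:
interfaces:
- bridge: {}
  name: default
- bridge: {}
  name: vlan-100-net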
Once you start the VMI, it should have the eth0 interface connected to
the default cluster network and eth1 connected to VLAN 100 using
bridge br1 on the node running the VMI.
PXE booting, or network booting, is supported in Container-native Virtualization. Network booting allows a computer to boot and load an operating system or other program without requiring a locally attached storage device. For example, you can use it to choose your desired OS image from a PXE server when deploying a new host.
We have provided a configuration file template for PXE booting in the Reference section of this article.
A cluster running OpenShift Container Platform 3.11 or newer
A configured interface that allows PXE booting
Configure a PXE network on the cluster:
Create NetworkAttachmentDefinition of PXE network pxe-net-conf:
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
name: pxe-net-conf
spec:
config: '{
"cniVersion": "0.3.1",
"type": "ovs",
"bridge": "br1"
}'
| In this example, the VMI will be attached through a trunk port to the Open vSwitch bridge br1. |
Create Open vSwitch bridge br1 and connect it to interface eth1,
which is connected to a network that allows for PXE booting:
$ ovs-vsctl add-br br1
$ ovs-vsctl add-port br1 eth1
$ ovs-vsctl show
8d004495-ea9a-44e1-b00c-3b65648dae5f
Bridge br1
Port br1
Interface br1
type: internal
Port "eth1"
Interface "eth1"
ovs_version: "2.8.1"
| This bridge must be configured on all nodes. If it is only available on a subset of nodes, make sure that VMIs have nodeSelector constraints in place. |
Edit the virtual machine instance configuration file to include the details of the interface and network.
Specify the network and MAC address, if required by the PXE server. If the MAC address is not specified, a value is assigned automatically. Note that automatically assigned MAC addresses are not currently persistent.
Ensure that bootOrder is set to 1 so that the interface boots first.
In this example, the interface is connected to a network called
pxe-net:
interfaces:
- bridge: {}
name: default
- bridge: {}
name: pxe-net
macAddress: de:00:00:00:00:de
bootOrder: 1
| Boot order is global for interfaces and disks. |
Assign a boot device number to the disk to ensure proper booting after OS provisioning.
Set the disk bootOrder value to 2:
devices:
disks:
- disk:
bus: virtio
name: registrydisk
volumeName: registryvolume
bootOrder: 2
Specify that the network is connected to the previously created
NetworkAttachmentDefinition. In this scenario, pxe-net is connected
to the NetworkAttachmentDefinition called pxe-net-conf:
networks:
- name: default
pod: {}
- name: pxe-net
multus:
networkName: pxe-net-conf
Create the virtual machine instance:
$ oc create -f vmi-pxe-boot.yaml
virtualmachineinstance.kubevirt.io "vmi-pxe-boot" created
Wait for the virtual machine instance to run:
$ oc get vmi vmi-pxe-boot -o yaml | grep -i phase
phase: Running
View the virtual machine instance using VNC:
$ virtctl vnc vmi-pxe-boot
Watch the boot screen to verify that the PXE boot is successful.
Log in to the VMI:
$ virtctl console vmi-pxe-boot
Verify the interfaces and MAC address on the VM. The interface
connected to the bridge will have the specified MAC address. In this
case, we used eth1 for the PXE boot, without an IP address. The other
interface, eth0, got an IP address from OpenShift Container Platform.
$ ip addr
...
3: eth1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
   link/ether de:00:00:00:00:de brd ff:ff:ff:ff:ff:ff
If your virtual workload requires more memory than available, you can use memory overcommitment to allocate all or most of the host’s memory to your virtual machine instances. Enabling memory overcommitment means you can maximize resources that are normally reserved for the host.
For example, if the host has 32 GB RAM, you can leverage memory overcommitment to fit 8 VMs with 4 GB RAM each. This works under the assumption that the VMs will not use all of their memory at the same time.
A cluster running OpenShift Container Platform 3.11 or newer
To explicitly tell the VMI that it has more memory available than what
has been requested from the cluster, set spec.domain.memory.guest to a
higher value than spec.domain.resources.requests.memory. This process
is called memory overcommitment.
In this example, 1024M is requested from the cluster, but the VMI is told that it has 2048M available. As long as there is enough free memory available on the node, the VMI will consume up to 2048M.
kind: VirtualMachine
spec:
template:
domain:
resources:
requests:
memory: 1024M
memory:
guest: 2048M
| The same eviction rules as those for pods apply to the VMI if the node gets under memory pressure. |
| This procedure is only useful in certain use-cases and should only be attempted by advanced users. |
A small amount of memory is requested by each virtual machine instance in
addition to the amount that you request. This additional memory is used for
the infrastructure wrapping each VirtualMachineInstance process.
Though it is not usually advisable, it is possible to increase the VMI density on the node by disabling guest memory overhead accounting.
A cluster running OpenShift Container Platform 3.11 or newer
To disable guest memory overhead accounting, edit the YAML configuration
file and set the overcommitGuestOverhead value to true. This parameter is
disabled by default.
kind: VirtualMachine
spec:
template:
domain:
resources:
overcommitGuestOverhead: true
requests:
memory: 1024M
| If overcommitGuestOverhead is enabled, it adds the guest overhead to memory limits (if present). |
OpenShift Container Platform events are records of important life-cycle information in a project and are useful for monitoring and troubleshooting resource scheduling, creation, and deletion issues.
To retrieve the events for the project, run:
$ oc get events
Events are also included in the resource description, which you can retrieve by using the OpenShift Container Platform client.
$ oc describe <resource_type> <resource_name>
$ oc describe vm fedora-vm
$ oc describe vmi fedora-vm
$ oc describe pod virt-launcher-fedora-vm-zzftf
Resource descriptions also include configuration, scheduling, and status details.
Logs are collected for OpenShift Container Platform builds, deployments, and pods. Virtual machine logs can be retrieved from the virtual machine launcher pod.
$ oc logs virt-launcher-fedora-vm-zzftf
The -f option follows the log output in real time, which is useful for
monitoring progress and error checking.
If the launcher pod is failing to start, you may need to use the
--previous option to see the logs of the last attempt.
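For example, using the launcher pod name from above (the pod name is specific to your environment):
$ oc logs -f virt-launcher-fedora-vm-zzftf
$ oc logs --previous virt-launcher-fedora-vm-zzftf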
OpenShift Container Platform Metrics collects memory, CPU, and network performance information for nodes, components, and containers in the cluster. The specific information collected depends on how the Metrics subsystem is configured. For more information on configuring Metrics, see the OpenShift Container Platform Configuring Clusters Guide.
The oc adm top command uses the Heapster API to fetch
data about the current state of pods and nodes in the cluster.
To retrieve metrics for a pod:
$ oc adm top pod <pod_name>
To retrieve metrics for the nodes in the cluster:
$ oc adm top node
The OpenShift Container Platform web console can represent metric information graphically over a time range.
| Volume type | Description |
|---|---|
| ephemeral | A local copy-on-write (COW) image that uses a network volume as a read-only backing store. The backing volume must be a PersistentVolumeClaim. |
| persistentVolumeClaim | Attaches an available PV to a virtual machine. This allows for the virtual machine data to persist between sessions. Importing an existing virtual machine disk into a PVC using CDI and attaching the PVC to a virtual machine instance is the recommended method for importing existing virtual machines into OpenShift Container Platform. There are some requirements for the disk to be used within a PVC. |
| dataVolume | DataVolumes build on the persistentVolumeClaim volume type and provide orchestration of import, clone, and upload operations associated with an underlying PVC. |
| cloudInitNoCloud | Attaches a disk containing the referenced cloud-init NoCloud data source, providing user data and metadata to the virtual machine. A proper cloud-init installation is required inside the virtual machine disk. |
| registryDisk | References an image, such as a virtual machine disk, that is stored in the container image registry. The image is pulled from the registry and embedded in a volume when the virtual machine is created. Registry disks are not limited to a single virtual machine, and are useful for creating large numbers of virtual machine clones that do not require persistent storage. Only RAW and QCOW2 formats are supported disk types for the container image registry. QCOW2 is recommended for reduced image size. |
| emptyDisk | Creates an additional sparse QCOW2 disk that is tied to the life-cycle of the virtual machine instance. The data survives guest-initiated reboots in the virtual machine but is discarded when the virtual machine stops or is restarted from the web console. The empty disk is used to store application dependencies and data that would otherwise exceed the limited temporary file system of an ephemeral disk. |
pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: "example-vmdisk-volume"
labels:
app: containerized-data-importer
annotations:
    cdi.kubevirt.io/storage.import.endpoint: "" # Required. Format: (http||s3)://www.myUrl.com/path/to/data
    cdi.kubevirt.io/storage.import.secretName: "" # Optional. The name of the secret containing credentials for the data source
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 5Gi
vm.yaml
apiVersion: kubevirt.io/v1alpha2
kind: VirtualMachine
metadata:
creationTimestamp: null
labels:
kubevirt-vm: fedora-vm
name: fedora-vm
spec:
running: false
template:
metadata:
creationTimestamp: null
labels:
kubevirt.io/domain: fedora-vm
spec:
domain:
devices:
disks:
- disk:
bus: virtio
name: registrydisk
volumeName: root
- disk:
bus: virtio
name: cloudinitdisk
volumeName: cloudinitvolume
machine:
type: ""
resources:
requests:
memory: 1Gi
terminationGracePeriodSeconds: 0
volumes:
- cloudInitNoCloud:
userData: |-
#cloud-config
password: fedora
chpasswd: { expire: False }
name: cloudinitvolume
- name: root
persistentVolumeClaim:
claimName: example-vmdisk-volume
status: {}
example-vm-dv.yaml
apiVersion: kubevirt.io/v1alpha2
kind: VirtualMachine
metadata:
labels:
kubevirt.io/vm: example-vm
name: example-vm
spec:
dataVolumeTemplates:
- metadata:
name: example-dv
spec:
pvc:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1G
source:
http:
url: "https://download.fedoraproject.org/pub/fedora/linux/releases/28/Cloud/x86_64/images/Fedora-Cloud-Base-28-1.1.x86_64.qcow2"
running: false
template:
metadata:
labels:
kubevirt.io/vm: example-vm
spec:
domain:
cpu:
cores: 1
devices:
disks:
- disk:
bus: virtio
name: disk0
volumeName: example-datavolume
machine:
type: q35
resources:
requests:
memory: 1G
terminationGracePeriodSeconds: 0
volumes:
- dataVolume:
name: example-dv
name: example-datavolume
example-import-dv.yaml
apiVersion: cdi.kubevirt.io/v1alpha1
kind: DataVolume
metadata:
name: "example-import-dv"
spec:
source:
http:
url: "https://download.fedoraproject.org/pub/fedora/linux/releases/28/Cloud/x86_64/images/Fedora-Cloud-Base-28-1.1.x86_64.qcow2" # Or S3
secretRef: "" # Optional
pvc:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: "1G"
example-clone-dv.yaml
apiVersion: cdi.kubevirt.io/v1alpha1
kind: DataVolume
metadata:
name: "example-clone-dv"
spec:
source:
pvc:
name: source-pvc
namespace: example-ns
pvc:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: "1G"
vmi-pxe-boot.yaml
apiVersion: kubevirt.io/v1alpha2
kind: VirtualMachineInstance
metadata:
creationTimestamp: null
labels:
special: vmi-pxe-boot
name: vmi-pxe-boot
spec:
domain:
devices:
disks:
- disk:
bus: virtio
name: registrydisk
volumeName: registryvolume
bootOrder: 2
- disk:
bus: virtio
name: cloudinitdisk
volumeName: cloudinitvolume
interfaces:
- bridge: {}
name: default
- bridge: {}
name: pxe-net
macAddress: de:00:00:00:00:de
bootOrder: 1
machine:
type: ""
resources:
requests:
memory: 1024M
networks:
- name: default
pod: {}
- multus:
networkName: pxe-net-conf
name: pxe-net
terminationGracePeriodSeconds: 0
volumes:
- name: registryvolume
registryDisk:
image: kubevirt/fedora-cloud-registry-disk-demo
- cloudInitNoCloud:
userData: |
#!/bin/bash
echo "fedora" | passwd fedora --stdin
name: cloudinitvolume
status: {}